Accelerating Vertical Federated Learning

Authors

Abstract

Privacy, security, and data governance constraints rule out a brute-force integration of cross-silo data, a problem inherent in the development of the Internet of Things. Federated learning has been proposed to ensure that all parties can collaboratively complete a training task while their data remain local. Vertical federated learning is the specialization of federated learning for distributed features. To preserve privacy, homomorphic encryption is applied, enabling computation over encrypted data without decryption. Nevertheless, along with its robust security guarantee, homomorphic encryption brings extra communication and computation overhead. In this paper, we analyze the current bottlenecks of vertical federated learning comprehensively and numerically. We propose a straggler-resilient and computation-efficient accelerating system that reduces the communication overhead in heterogeneous scenarios by up to 65.26% and the computation overhead caused by homomorphic encryption by up to 40.66%. Our system improves the robustness and efficiency of the vertical federated learning framework without loss of security.
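The abstract's key primitive is additively homomorphic encryption, which lets parties aggregate encrypted intermediate values without ever decrypting them. The paper does not specify a scheme here, but Paillier is the usual choice in vertical federated learning systems; the sketch below is a toy illustration of the additive property only, with tiny parameters chosen for readability, not a production implementation.

```python
import math
import random

# Toy Paillier keypair. Real deployments use vetted libraries and
# >= 2048-bit moduli; these small primes are for illustration only.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                         # standard generator choice g = n + 1
lam = math.lcm(p - 1, q - 1)      # private key component lambda

def L(x):
    """Paillier's L function: L(x) = (x - 1) / n."""
    return (x - 1) // n

# Second private component: mu = L(g^lambda mod n^2)^(-1) mod n.
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    """Encrypt plaintext m (0 <= m < n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext with the private key (lam, mu)."""
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so an aggregator can sum parties' encrypted values without decryption.
c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n2) == 17 + 25
```

The cost the paper targets is visible even in this toy: every encryption and decryption is a modular exponentiation over n², which is exactly the computation overhead that homomorphic encryption adds to each training round.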


Similar Articles

Federated Multi-Task Learning

Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theor...


Entity Resolution and Federated Learning get a Federated Resolution

Consider two data providers, each maintaining records of different feature sets about common entities. They aim to learn a linear model over the whole set of features. This problem of federated learning over vertically partitioned data includes a crucial upstream issue: entity resolution, i.e. finding the correspondence between the rows of the datasets. It is well known that entity resolution, ...


Federated Meta-Learning for Recommendation

Recommender systems have been widely studied from the machine learning perspective, where it is crucial to share information among users while preserving user privacy. In this work, we present a federated meta-learning framework for recommendation in which user information is shared at the level of algorithm, instead of model or data adopted in previous approaches. In this framework, user-speci...


Accelerating Competitive Learning Graph Quantization

Vector quantization (VQ) is a lossy data compression technique from signal processing for which simple competitive learning is one standard method to quantize patterns from the input space. Extending competitive learning VQ to the domain of graphs results in competitive learning for quantizing input graphs. In this contribution, we propose an accelerated version of competitive learning graph qua...


Accelerating Deep Learning with Memcomputing

Restricted Boltzmann machines (RBMs) and their extensions, often called "deep-belief networks", are very powerful neural networks that have found widespread applicability in the fields of machine learning and big data. The standard way to train these models resorts to an iterative unsupervised procedure based on Gibbs sampling, called "contrastive divergence", and additional supervised tunin...



Journal

Journal title: IEEE Transactions on Big Data

Year: 2022

ISSN: 2372-2096, 2332-7790

DOI: https://doi.org/10.1109/tbdata.2022.3192898